Momentum Iterative Fast Gradient Sign Algorithm for Adversarial Attacks and Defenses
P. Sathish Kumar1, K.V.D. Kiran2
1Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation,
Vaddeswaram, AP, India.
2Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation,
Vaddeswaram, AP, India.
*Corresponding Author E-mail: pandaramsathishkumar@gmail.com
ABSTRACT:
Deep neural networks (DNNs) are particularly vulnerable to adversarial samples when used as machine learning (ML) models. Such samples are typically created by adding low-magnitude perturbations to real-world samples so that they mimic legitimate inputs and deceive the target models. Since adversarial samples often transfer between models, black-box attacks are feasible in a variety of real-world scenarios. The main goal of this project is to produce a white-box adversarial attack using PyTorch and then offer a defense strategy as a countermeasure. We developed a powerful attacking strategy known as MI-FGSM (Momentum Iterative Fast Gradient Sign Method), which performs better than I-FGSM (Iterative Fast Gradient Sign Method) owing to its momentum adaptation. Using MI-FGSM greatly enhances transferability. The other objective of this project is to combine machine learning algorithms with quantum annealing solvers for the execution of adversarial attack and defense. Here, we take model-based actions based on the presence of attacks. Finally, we present experimental findings that show the validity of the developed attacking method by assessing the strengths of various models as well as the defensive strategies.
KEYWORDS: Machine Learning, PyTorch, MI-FGSM, attacks, adversarial samples, defense.
INTRODUCTION:
Models relying on the conception of ML (Machine Learning) are often exposed to considerable risk by adversarial attacks, since their inputs can be fabricated according to an attacker's preference1-3. The outcomes of such models can therefore be largely inaccurate, which drives the rate of false predictions up. In particular, even modern DNNs (Deep Neural Networks) are at risk from these attacks based on slight, human-imperceptible input alterations4,5, notwithstanding the advantages they offer in applications6 such as object identification7,8, image classification9-12, and speech recognition13,14.
More critically, several research studies have shown that adversarial examples possess considerable transferability5,15,16, which makes black-box attacks practically feasible in realistic applications and able to pose actual security challenges. In simpler terms, to attack an intended black-box model with unknown parameters and unknown architecture, attackers can construct a substitute model locally and then deploy the adversarial samples crafted on that local model against the intended target. Transferability rests on the fact that several ML models learn similar decision boundaries around a data point, so adversarial samples fabricated for one model are often effective against other related models as well.
Since adversarial samples pose crucial problems for ML deployments, a wide range of approaches has been devised to detect and defend against adversarial attacks17-22. In the case of DNNs, even though adversarial samples give rise to faulty classifications, the intermediate features still exhibit a considerable distinction between adversarial and genuine samples17; adversarial attack detection is therefore possible. For instance, a simple Gaussian model of the feature representations in convolutional layers can filter out a considerable fraction of adversarial samples17, and a detector network can identify such inputs19. Furthermore, when it comes to strengthening DNNs, training on adversarial samples has proven to be an effective approach4,18,22. By incorporating adversarial samples into the training step, the model learns to resist adversarial perturbations in the gradient direction of the loss function.
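The adversarial-training idea described above can be sketched with a minimal, framework-agnostic example. The logistic model, the learning rate, and the helper names below are illustrative assumptions for exposition, not the experimental setup of this work:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_training_step(w, x, y, epsilon, lr):
    """One hedged sketch of an adversarial-training step for a toy
    logistic model p = sigmoid(w.x): craft an FGSM-perturbed input,
    then take the usual gradient step on the weights using it."""
    p = sigmoid(x @ w)
    grad_x = (p - y) * w                   # d(cross-entropy)/d(input)
    x_adv = x + epsilon * np.sign(grad_x)  # FGSM perturbation of the input
    p_adv = sigmoid(x_adv @ w)
    grad_w = (p_adv - y) * x_adv           # d(cross-entropy)/d(weights) on x_adv
    return w - lr * grad_w                 # weights updated on the adversarial sample
```

The point of the sketch is only the ordering: the perturbation is generated first, along the loss gradient with respect to the input, and the model is then trained on the perturbed sample so that it learns to resist that direction.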
Figure 1 Depiction of general adversarial attack with ML [23]
Figure 1 above depicts a general adversarial attack on ML as per [23]. Any general ML model fundamentally comprises two stages: training and decision-making. Since each stage occurs at a different time, ML-oriented adversarial attacks happen either at training time or at decision time. According to [23], the approaches used by attackers in adversarial ML can be categorized into two groups based on this attack timing:
· Model Poisoning- The attacker steers the model into producing wrongful labels by deploying a few perturbed instances once the model has been produced.
· Data Poisoning- The attacker alters the labels of a few training instances to deceive the resulting model.
The ever-rising development of computer-based approaches throughout the globe has driven a shift from conventional approaches to ML-based ones. Over the years, researchers have come to realize the importance of these ML approaches and have shifted their attention toward them. Quantum Annealing Solver-based classification approaches have been adopted by many existing studies to serve both defense and attack purposes with the deployment of ML, and these deployments outperform the conventionally adopted approaches. For instance, [24] devised a specific arrangement for developing their model, named FGSM (Fast Gradient Sign Method). Despite being beneficial, the approach had a few shortcomings as well:
· Less accurate results.
· Difficult to scale up.
· Requirement for raised memory/ space.
· Time consuming.
Beyond the capabilities of the above-mentioned FGSM [4], iterative approaches [18] generally achieve a higher success rate when attacking a white-box model. However, the adversarial samples produced by such iterative approaches transfer at a reduced rate [18, 22], and therefore iterative approaches make for much weaker black-box attacks. To redress the challenges faced by these iterative approaches, MI-FGSM (Momentum Iterative Fast Gradient Sign Method) is the go-to option. It can be shown that adversarial samples produced with this approach concurrently achieve high success rates in both black-box and white-box attacks. The approach has the potential to eliminate the trade-off between transferability and white-box attack strength, and it is a more robust attacking procedure than the plain iterative approaches and the nominal FGSM. MI-FGSM also eases the investigation of the various ensemble methodologies involving multiple models, which is very useful for raising the success rate of an attack [15]. This demonstrates the soundness of MI-FGSM with regard to ensemble methodologies. Furthermore, ensembles built with MI-FGSM are much stronger, which helps to avoid existing bias and reduce instabilities.
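As a hedged sketch of the momentum scheme just described (accumulate normalized gradients in a velocity term, then step along its sign), the routine below assumes a hypothetical `grad_fn` callback returning the loss gradient with respect to the input; the step size, momentum factor `mu`, and epsilon-ball clipping are illustrative choices, not the exact configuration used in our experiments:

```python
import numpy as np

def mi_fgsm(x, y, grad_fn, epsilon, steps=10, mu=1.0):
    """Momentum Iterative FGSM sketch. grad_fn(x_adv, y) is a
    placeholder for the attacked model's dLoss/d(input)."""
    alpha = epsilon / steps       # per-step size so the total budget is epsilon
    g = np.zeros_like(x)          # accumulated (momentum) gradient
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv, y)
        # Normalize by the L1 norm before accumulating, so each step
        # contributes comparably regardless of raw gradient magnitude.
        g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)
        x_adv = x_adv + alpha * np.sign(g)
        # Keep the perturbation inside the L-infinity epsilon ball.
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
    return x_adv
```

Setting `mu = 0` recovers plain I-FGSM, and `steps = 1` collapses to one-shot FGSM, which is why the three methods can be compared within one framework.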
The major objectives of this work focusing on the improved attacks and defense using ML are as follows:
· To incorporate a novel white-box attack procedure known as MI-FGSM, in which the loss function gradients are accumulated at every iteration to stabilize the optimization and escape poor local maxima.
· To utilize Quantum Annealing Solvers so that the utilization of ML is eased.
· To integrate ML with the less capable approaches, FGSM and I-FGSM (Iterative Fast Gradient Sign Method), as well as the more capable approach, MI-FGSM (realized as per the framed problem statement).
· To increase the success rate of both black-box and white-box attacks as the number of iterations is raised.
· To avoid the existing bias and reduce instabilities so that predictions are accurate.
· To validate the superior performance of the devised model by successfully integrating ML with all three approaches (FGSM, I-FGSM, and MI-FGSM) and identifying the best among them.
The rest of this paper is arranged as follows: Section II deals with the relevant research already proposed in the scholarly world with regard to both attack and defense procedures; Section III presents the preliminaries for the current research work by discussing the basics of FGSM, white-box testing, and black-box testing, along with a brief on the implementation with PyTorch; Section IV deals with our devised model achieving the successful integration of ML with I-FGSM and MI-FGSM; Section V presents the simulation-based results that validate the superior performance of our devised approach against the two conventional approaches; and Section VI presents the concluding remarks for the entire research work concerned with adversarial attacks and defense procedures using ML integrated with both I-FGSM and MI-FGSM.
RELATED WORKS:
In the field of machine learning, several articles have been written about adversarial attacks and their defenses. The majority of attacks concentrate on evasion, where the adversarial instances appear at testing time, as opposed to poisoning, where poisoned data is placed into the training set. The following is a collection of publications that highlight the danger of adversarial attacks:
The first complete assessment of adversarial threats in reinforcement learning under AI security was presented by [25]. That paper also offers a concise overview of the most typical countermeasures against current adversarial attempts. [26] evaluated current discoveries on adversarial instances for deep neural networks, summarized the techniques for producing such samples, and proposed a taxonomy of these techniques. Applications of adversarial instances were examined within the taxonomy, and the research goes into further detail on defenses against hostile examples. Three significant open problems around adversarial examples are also explored, along with possible remedies. The research performed up to that point was summarized in [27]. That study also made an effort to present the first analytic model for a methodical comprehension of adversarial attacks. The architecture was created from a cybersecurity standpoint to offer an attack and defense lifecycle for adversaries.
[28] provides a thorough overview of the scientific studies on adversarial attacks against security systems built using machine learning techniques and highlights the dangers such attacks pose. The article also classifies adversarial attack and defense applications in the field of cyber security, and concludes by highlighting certain traits found in recent research and discussing the influence of recent developments in other adversarial learning domains on potential future research paths in cyber security. The most recent developments in research on adversarial attack and defense methods in deep learning are summarized in [29]. That work explains the adversarial attack techniques for the training stage and the testing stage, respectively, then categorizes the uses of adversarial attack methods in the physical world, cyberspace security, natural language processing, and computer vision. It concludes by discussing the three primary types of current adversarial defense techniques: changing data, changing models, and utilizing auxiliary tools. For the purpose of ensuring the security of machine learning algorithms, [30] investigates methods to create adversarially robust trained algorithms. In addition, the paper offers a taxonomy for categorizing adversarial attacks and defenses, formulates the Robust Optimization problem in a min-max setting, and divides it into three subcategories: adversarial (re)training, regularization techniques, and certified defenses. The most recent and significant developments in adversarial example generation were then reviewed, together with defense systems that use adversarial (re)training as their primary line of protection against perturbations. It also examines techniques that include regularization terms, which alter the gradient's behavior and make it more difficult for attackers to succeed in their mission.
[31] suggests a fine-grained, system-driven taxonomy to clearly define the applications of ML and adversarial modeling techniques, in order to enable independent researchers to duplicate trials and intensify the competition to create more advanced and reliable ML applications. The article offers taxonomies for topics such as the ML architecture, the adversary's approach, the dataset, the defense reaction and purpose, and the knowledge and capacity of the adversary. Also, by suggesting an adversarial machine learning cycle, the connections between these models and taxonomies are examined.
[32] provides a thorough review of adversarial attacks and countermeasures in the actual physical environment. Before analyzing the difficulties faced by applications in real contexts, it first studied works that can successfully produce hostile instances in the digital world. The work on adversarial examples for tasks like speech recognition, target detection, and picture classification is then compared and summarized. For the three most common data forms, namely photos, graphs, and text, [33] evaluated the most recent techniques for producing adversarial instances and the defenses against adversarial examples. The first comprehensive examination of image-scaling attacks was offered by [34]. The paper conceptually examines the attacks from the standpoint of signal processing and determines that the interaction between downsampling and convolution is their underlying cause. Based on this discovery, the study explores three well-known machine learning imaging libraries (Pillow, OpenCV, and TensorFlow) and confirms the existence of this interaction in several scaling techniques. The work creates a unique protection against image-scaling attacks that thwarts all potential attack variations. Lastly, the study experimentally shows the effectiveness of this defense against both adaptive and non-adaptive attackers.
In addition to outlining a defense plan, [35] proposes an adversarial machine learning technique for carrying out jamming attacks on wireless communications. A transmitter in a cognitive radio network monitors the channels, spots opportunities in the spectrum, and sends data to the receiver on fixed channels. KuafuDet, a two-phase learning enhancement technique that detects mobile malware through adversarial detection, was suggested by [36]. [37] examined the issue of adversarial attacks against DL-based UAVs and proposed two adversarial attack techniques for the regression models in UAVs. The results of the trials show that both the targeted and non-targeted attack approaches are capable of producing undetectable hostile pictures and posing a serious danger to UAV control and navigation.
[38] investigated the use of adversarial machine learning for malware detection. In particular, by taking into account the different contributions of the features to the classification problem, the research presents an effective evasion attack model (named EvnAttack) against a learning-based classifier whose input is Windows Application Programming Interface (API) calls extracted from Portable Executable (PE) files. [39] presents the six most potent gradient-based adversarial attacks on the ResNet image recognition model and highlights the drawbacks of the conventional adversarial retraining approach. The research then proposes an innovative ensemble defense approach based on an adversarial retraining technique. The suggested technique can accurately detect more than 89.31% and as much as 96.24% of the six adversarial attacks on the CIFAR-10 dataset. The design techniques and experiments presented were considered to be broadly transferable to various machine learning, deep learning, and computational intelligence security fields.
PRELIMINARIES:
FGSM, abbreviated from Fast Gradient Sign Method, is a simple and efficient approach for generating adversarial images. The approach was pioneered by the researcher Goodfellow along with his research peers, who explained and harnessed FGSM and adversarial samples as follows [40]:
· Consider an image as the input.
· Make a prediction on the image using a trained CNN.
· Estimate the loss of the prediction against the correct class label.
· Estimate the gradients of the loss with respect to the input image.
· Take the sign of the gradient.
· Use the signed gradient to build the intended adversarial image as the output.
The above procedure is pictorially indicated in figure 2 below:
Figure 2 Pictorial representation of FGSM showing the production of adversarial image [40]
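Assuming a hypothetical `grad_fn` placeholder standing in for the trained CNN's loss gradient with respect to the input (in PyTorch this would come from autograd on the cross-entropy loss), the six steps above reduce to a single signed-gradient update; this is a sketch, not the exact implementation used here:

```python
import numpy as np

def fgsm_attack(x, y, grad_fn, epsilon):
    """One-shot FGSM mirroring the steps above: compute the loss
    gradient w.r.t. the input, take its sign, add an epsilon step."""
    grad = grad_fn(x, y)                  # loss gradient w.r.t. the input image
    x_adv = x + epsilon * np.sign(grad)   # signed-gradient step
    return np.clip(x_adv, 0.0, 1.0)       # keep pixels in a valid [0, 1] range
```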
White-box testing: As per [41, 42], this is an assessment approach relying on the intrinsic structure of a system or component. Such assessments often require the personnel involved to possess good programming knowledge so that they can comprehend the source code properly. This kind of assessment can be executed any time once the code development is completed.
Black-box testing: As per [41, 42], this is an assessment approach that does not use the intrinsic structure of a system or component as a reference while proceeding with the assessment. Such assessments often do not require the personnel involved to possess programming knowledge, since only the basic system aspects are assessed, without much detail.
When it comes to implementations with PyTorch, the person involved needs to specify the model subjected to attack, then code the intended attack, and finally execute a few assessments as applicable to the diverse range of applications [43]. Whenever PyTorch is used for the implementation, the following three inputs must be known [43]:
· use_cuda
· pretrained_model
· epsilons.
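For illustration, the three inputs might be declared as below; the epsilon sweep follows the PyTorch tutorial's convention of including 0 as a no-attack baseline [43], while the checkpoint path is a hypothetical placeholder, not the tutorial's actual file:

```python
# Perturbation budgets to sweep; epsilon = 0 reproduces clean accuracy.
epsilons = [0.0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3]

# Path to the weights of the model under attack (illustrative placeholder).
pretrained_model = "checkpoints/mnist_model.pth"

# Whether to run the attack on a GPU when one is available.
use_cuda = True
```

Sweeping several epsilons matters because accuracy typically degrades monotonically as the budget grows, which is what the accuracy graphs in the results section compare across FGSM, I-FGSM, and MI-FGSM.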
PROPOSED WORK:
In this research work concerned with adversarial attacks, we integrate ML with both the less capable approach, I-FGSM, and the more capable approach, MI-FGSM. Furthermore, we have used Quantum Annealing Solvers so that the utilization of ML is eased while we execute both the attack and defense procedures. This increases the success rate of both black-box and white-box attacks as the iterations are raised. Furthermore, we can avoid the bias currently prevailing in the model under consideration and reduce its instabilities, so that the predictions made by the model are much more accurate and precise. In the subsequent section, we validate the superior performance of the devised model, which achieves the successful integration of ML with I-FGSM and MI-FGSM.
The following are the advantages of our devised adversarial attack and defense using ML:
· Highest accuracy.
· Reduced time complexity.
· Better information on Relief features.
Figure 3 Block Diagram of research work concerned with Adversarial attacks and defenses.
The basic block diagram of the research work is shown in the above figure 3.
RESULTS AND DISCUSSION:
The simulation outputs of the adversarial attacks, and of the defense against the adversarial samples, for FGSM, I-FGSM, and MI-FGSM are given in this section to make a comprehensive comparative study.
The simulation outputs of the adversarial attacks for the three techniques FGSM, I-FGSM, and MI-FGSM are attached here.
Figure 4 Representation of FGSM
The representation of FGSM (i.e., Fast Gradient Sign Method) for adversarial attack is shown in the above figure 4.
Figure 5 Accuracy values for FGSM
The accuracy values of the FGSM for adversarial attack are depicted in the above figure 5.
Figure 6 Accuracy Graph for FGSM
The graph plotted for the corresponding accuracy values of FGSM for adversarial attack is portrayed in the above figure 6.
Figure 7 Representation of I-FGSM
The representation of I-FGSM (i.e., Iterative Fast Gradient Sign Method) for adversarial attack is shown in the above figure 7.
Figure 8 Accuracy values for I-FGSM
The accuracy values of the I-FGSM for adversarial attack are depicted in the above figure 8.
Figure 9 Accuracy Graph for I-FGSM
The graph plotted for the corresponding accuracy values of I-FGSM for adversarial attack is portrayed in the above figure 9. It can be seen that the I-FGSM is more efficient than the FGSM.
Figure 10 Representation of MI-FGSM
The representation of MI-FGSM (i.e., Momentum Iterative Fast Gradient Sign Method) for adversarial attack is shown in the above figure 10.
Figure 11 Accuracy values for MI-FGSM
The accuracy values of the MI-FGSM for adversarial attack are depicted in the above figure 11.
Figure 12 Accuracy Graph for MI-FGSM
The graph plotted for the corresponding accuracy values of MI-FGSM for adversarial attack is portrayed in the above figure 12. It can be seen that the MI-FGSM is more efficient than both the FGSM and I-FGSM.
Figure 13 Fitting Model
The values of the fitting model for adversarial attack are provided in the above figure 13.
Figure 14 Graph for fitting model
The graph plotted for the corresponding values of the fitting model for adversarial attack is expressed in the above figure 14.
The simulation outputs of the defense against the adversarial attacks for three different techniques such as FGSM, I-FGSM and MI-FGSM are attached here.
Figure 15 Representation of FGSM
The representation of FGSM (i.e., Fast Gradient Sign Method) for defense against the attacks is shown in the above figure 15.
Figure 16 Accuracy values for FGSM
The accuracy values of the FGSM for defense against the attacks are depicted in the above figure 16.
Figure 17 Accuracy Graph for FGSM
The graph plotted for the corresponding accuracy values of FGSM for defense against the attacks is portrayed in the above figure 17.
Figure 18 Representation of I-FGSM
The representation of I-FGSM (i.e., Iterative Fast Gradient Sign Method) for defense against the attacks is shown in the above figure 18.
Figure 19 Accuracy values for I-FGSM
The accuracy values of the I-FGSM for defense against the attacks are depicted in the above figure 19.
Figure 20 Accuracy Graph for I-FGSM
The graph plotted for the corresponding accuracy values of I-FGSM for defense against the attacks is portrayed in the above figure 20. It can be seen that the I-FGSM is more efficient than the FGSM.
Figure 21 Representation of MI-FGSM
The representation of MI-FGSM (i.e., Momentum Iterative Fast Gradient Sign Method) for defense against the attacks is shown in the above figure 21.
Figure 22 Accuracy values for MI-FGSM
The accuracy values of the MI-FGSM for defense against the attacks are depicted in the above figure 22.
Figure 23 Accuracy Graph for MI-FGSM
The graph plotted for the corresponding accuracy values of MI-FGSM for defense against the attacks is portrayed in the above figure 23. It can be seen that the MI-FGSM is more efficient than both the FGSM and I-FGSM.
Figure 24 Accuracy values for Network F
The accuracy values of the Network F for defense against the attacks are provided in the above figure 24.
Figure 25 Fitting Model for the accuracy of Network F
The graph plotted for the corresponding accuracy values of Network F for defense against the attacks is expressed in the above figure 25.
Figure 26 Accuracy values for Network F'
The accuracy values of the Network F' for defense against the attacks are provided in the above figure 26.
Figure 27 Fitting Model for the accuracy of Network F'
The graph plotted for the corresponding accuracy values of Network F' for defense against the attacks is expressed in the above figure 27.
CONCLUSION:
In this project, we have used PyTorch to implement a white-box adversarial attack and presented a defense method as a solution to this attack. We devised a robust attacking approach, MI-FGSM, which is superior to I-FGSM due to the additional momentum term that stabilizes the update process and thereby escapes poor local maxima while producing adversarial samples. We also integrated the machine learning algorithms with Quantum Annealing Solvers for the execution of the adversarial attack and defense. The experimental validation revealed that MI-FGSM is a more robust attacking approach than its conventional counterparts, FGSM and I-FGSM, in both the black-box and white-box attack cases. The various outcomes of the three approaches were demonstrated in this research work to show the soundness of MI-FGSM with regard to ensemble methodologies and its superiority over the other two approaches, FGSM and I-FGSM, in the comparison.
CONFLICT OF INTEREST:
The authors have no conflicts of interest regarding this investigation.
ACKNOWLEDGMENTS:
The authors would like to thank Dr. Naga Malleswari, M.Tech Coordinator, for the kind support during the project.
REFERENCES:
1. Dalvi, N., et al. Adversarial classification. in Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining. 2004.
2. Huang, L., et al. Adversarial machine learning. in Proceedings of the 4th ACM workshop on Security and artificial intelligence. 2011.
3. Lowd, D. and C. Meek. Good Word Attacks on Statistical Spam Filters. in CEAS. 2005.
4. Goodfellow, I.J., J. Shlens, and C. Szegedy, Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
5. Szegedy, C., et al., Intriguing properties of neural networks. 2013.
6. LeCun, Y., Y. Bengio, and G. Hinton, Deep learning. Nature, 2015. 521(7553): p. 436-444.
7. Ren, S., et al., Faster r-cnn: Towards real-time object detection with region proposal networks. 2015. 28.
8. Girshick, R., et al. Rich feature hierarchies for accurate object detection and semantic segmentation. in Proceedings of the IEEE conference on computer vision and pattern recognition. 2014.
9. He, K., et al. Identity mappings in deep residual networks. in Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part IV 14. 2016. Springer.
10. Krizhevsky, A., I. Sutskever, and G.E. Hinton, Imagenet classification with deep convolutional neural networks. Communications of the ACM, 2017. 60(6): p. 84-90.
11. Szegedy, C., et al. Going deeper with convolutions. in Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.
12. Simonyan, K. and A. Zisserman, Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
13. Seide, F., G. Li, and D. Yu. Conversational speech transcription using context-dependent deep neural networks. in Twelfth annual conference of the international speech communication association. 2011.
14. Mohamed, A.-r., et al., Acoustic modeling using deep belief networks. 2011. 20(1): p. 14-22.
15. Liu, Y., et al., Delving into transferable adversarial examples and black-box attacks. 2016.
16. Moosavi-Dezfooli, S.-M., et al. Universal adversarial perturbations. in Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
17. Dong, Y., et al., Towards interpretable deep neural networks by leveraging adversarial examples. 2017.
18. Kurakin, A., I. Goodfellow, and S. Bengio, Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.
19. Metzen, J.H., et al., On detecting adversarial perturbations. 2017.
20. Pang, T., C. Du, and J. Zhu, Robust deep learning via reverse cross-entropy training and thresholding test. arXiv preprint, 2017.
21. Papernot, N., et al. Distillation as a defense to adversarial perturbations against deep neural networks. in 2016 IEEE symposium on security and privacy (SP). 2016. IEEE.
22. Tramèr, F., et al., Ensemble adversarial training: Attacks and defenses. 2017.
23. Science, T.D., https://towardsdatascience.com/adversarial-machine-learning-mitigation-adversarial-learning-9ae04133c137. (Accessed on February 20, 2023), 2023.
24. Swathi, Y. and A. Sunitha, Monitoring Fake Profiles on Social Media.
25. Chen, T., et al., Adversarial attack and defense in reinforcement learning-from AI security view. 2019. 2: p. 1-22.
26. Yuan, X., et al., Adversarial examples: Attacks and defenses for deep learning. 2019. 30(9): p. 2805-2824.
27. Zhou, S., et al., Adversarial Attacks and Defenses in Deep Learning: From a Perspective of Cybersecurity. 2022. 55(8): p. 1-39.
28. Rosenberg, I., et al., Adversarial machine learning attacks and defense methods in the cyber security domain. 2021. 54(5): p. 1-36.
29. Qiu, S., et al., Review of artificial intelligence adversarial attack and defense technologies. 2019. 9(5): p. 909.
30. Silva, S.H. and P. Najafirad, Opportunities and challenges in deep learning adversarial robustness: A survey. arXiv preprint, 2020.
31. Sadeghi, K., A. Banerjee, and S.K. Gupta, A system-driven taxonomy of attacks and defenses in adversarial machine learning. IEEE Transactions on Emerging Topics in Computational Intelligence, 2020. 4(4): p. 450-467.
32. Ren, H., et al., Adversarial examples: attacks and defenses in the physical world. 2021: p. 1-12.
33. Xu, H., et al., Adversarial attacks and defenses in images, graphs and text: A review. 2020. 17: p. 151-178.
34. Quiring, E., et al. Adversarial preprocessing: Understanding and preventing image-scaling attacks in machine learning. in Proceedings of the 29th USENIX Conference on Security Symposium. 2020.
35. Shi, Y., et al. Adversarial deep learning for cognitive radio security: Jamming attack and defense strategies. in 2018 IEEE international conference on communications workshops (ICC Workshops). 2018. IEEE.
36. Chen, S., et al., Automated poisoning attacks and defenses in malware detection systems: An adversarial machine learning approach. 2018. 73: p. 326-344.
37. Tian, J., et al., Adversarial Attacks and Defenses for Deep-Learning-Based Unmanned Aerial Vehicles. 2021. 9(22): p. 22399-22409.
38. Chen, L., Y. Ye, and T. Bourlai. Adversarial machine learning in malware detection: Arms race between evasion attack and defense. in 2017 European intelligence and security informatics conference (EISIC). 2017. IEEE.
39. Mani, N., et al., Defending deep learning models against adversarial attacks. 2021. 13(1): p. 72-89.
40. Pyimagesearch,https://pyimagesearch.com/2021/03/01/adversarial-attacks-with-fgsm-fast-gradient-sign-method/. (Accessed on February 20, 2023), 2023.
41. Kumar, M., et al., A comparative study of black box testing and white box testing techniques. 2015. 3(10).
42. Spiceworks, https://www.spiceworks.com/tech/devops/articles/black-box-vs-white-box-testing/. (Accessed on February 20, 2023), 2023.
43. Pytorch, https://pytorch.org/tutorials/beginner/fgsm_tutorial.html. (Accessed on February 20, 2023), 2023.
Received on 13.05.2023. Accepted on 30.09.2023. ©A&V Publications, all rights reserved. Research J. Engineering and Tech. 2023; 14(1):7-24. DOI: 10.52711/2321-581X.2023.00002